Automatic Assessment of Students' Classroom Engagement with Bias Mitigated Multi-task Model
Thiering, James, Krishna, Tarun Sethupat Radha, Zelkin, Dylan, Biswas, Ashis Kumer
With the rise of online and virtual learning, monitoring and enhancing student engagement have become important aspects of effective education. Traditional methods of assessing a student's involvement may not apply directly to virtual environments. In this study, we focused on this problem and addressed the need for an automated system that detects student engagement levels during online learning. We proposed a novel training method that discourages a model from leveraging sensitive features such as gender for its predictions. The proposed method offers benefits not only in enforcing ethical standards but also in enhancing the interpretability of model predictions. We applied an attribute-orthogonal regularization technique to a split-model classifier, which uses multiple transfer learning strategies, and reduced the disparity between the prediction distributions of sensitive groups, raising the Pearson correlation coefficient between them from 0.897 for the unmitigated model to 0.999 for the mitigated model. The source code for this project is available at https://github.com/ashiskb/elearning-engagement-study .
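The attribute-orthogonal regularization mentioned in the abstract can be illustrated with a minimal sketch. The penalty below is an assumption about the general idea, not the authors' exact formulation: given a shared feature space with one linear head for the engagement task and one for the sensitive attribute, penalizing the cross-projection of the two weight matrices pushes the heads toward orthogonal subspaces, so the task head cannot reuse directions that encode the sensitive attribute.

```python
import numpy as np

def orthogonality_penalty(w_task: np.ndarray, w_attr: np.ndarray) -> float:
    """Squared Frobenius norm of W_task @ W_attr.T.

    Adding this term to the training loss discourages the task head
    (w_task) from sharing feature directions with the sensitive-attribute
    head (w_attr). Zero means the two heads are fully orthogonal.
    """
    return float(np.sum((w_task @ w_attr.T) ** 2))

# Orthogonal heads incur no penalty...
w_task = np.array([[1.0, 0.0, 0.0]])
w_attr = np.array([[0.0, 1.0, 0.0]])
print(orthogonality_penalty(w_task, w_attr))          # 0.0

# ...while heads that share a feature direction are penalized.
w_attr_overlap = np.array([[1.0, 1.0, 0.0]])
print(orthogonality_penalty(w_task, w_attr_overlap))  # 1.0
```

In practice the penalty would be weighted by a hyperparameter and added to the sum of the two task losses during joint training.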
Automatic Assessment of Divergent Thinking in Chinese Language with TransDis: A Transformer-Based Language Model Approach
Yang, Tianchen, Zhang, Qifan, Sun, Zhaoyang, Hou, Yubo
Language models have become increasingly popular for automatic creativity assessment, generating semantic distances to objectively measure the quality of creative ideas. However, there is currently no automatic assessment system for evaluating creative ideas in the Chinese language. To address this gap, we developed TransDis, a scoring system using transformer-based language models, capable of providing valid originality (quality) and flexibility (variety) scores for Alternative Uses Task (AUT) responses in Chinese. Study 1 demonstrated that the latent model-rated originality factor, composed of three transformer-based models, strongly predicted human originality ratings, and the model-rated flexibility strongly correlated with human flexibility ratings as well. Criterion validity analyses indicated that model-rated originality and flexibility positively correlated with other creativity measures, demonstrating validity similar to human ratings. Studies 2 and 3 showed that TransDis effectively distinguished participants instructed to provide creative vs. common uses (Study 2) and participants instructed to generate ideas in a flexible vs. persistent way (Study 3). Our findings suggest that TransDis can be a reliable and low-cost tool for measuring idea originality and flexibility in the Chinese language, potentially paving the way for automatic creativity assessment in other languages. We offer an open platform to compute originality and flexibility for AUT responses in Chinese and over 50 other languages (https://osf.io/59jv2/).
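The semantic-distance idea behind systems like TransDis can be sketched as follows. This is a generic illustration under assumed definitions, not the TransDis scoring pipeline itself: originality of a single AUT response is approximated by its cosine distance from the prompt object's embedding, and flexibility of a response set by the mean pairwise distance among the responses.

```python
import numpy as np

def semantic_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity: larger distance ~ more original response."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - float(cos)

def flexibility(response_vecs: list) -> float:
    """Mean pairwise semantic distance among responses: larger ~ more varied set."""
    n = len(response_vecs)
    dists = [semantic_distance(response_vecs[i], response_vecs[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)

# Toy embeddings (real systems would use transformer sentence embeddings).
prompt = np.array([1.0, 0.0, 0.0])
common_use = np.array([1.0, 0.0, 0.0])    # identical meaning -> distance 0
novel_use = np.array([0.0, 1.0, 0.0])     # orthogonal meaning -> distance 1
print(semantic_distance(prompt, common_use))  # 0.0
print(semantic_distance(prompt, novel_use))   # 1.0
```

A production system would embed each response with a multilingual transformer model and calibrate the raw distances against human ratings.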
Automatic Assessment of OCR Quality in Historical Documents
Gupta, Anshul (Texas A&M University) | Gutierrez-Osuna, Ricardo (Texas A&M University) | Christy, Matthew (Texas A&M University) | Capitanu, Boris (University of Illinois at Urbana-Champaign) | Auvil, Loretta (University of Illinois at Urbana-Champaign) | Grumbach, Liz (Texas A&M University) | Furuta, Richard (Texas A&M University) | Mandell, Laura (Texas A&M University)
Mass digitization of historical documents is a challenging problem for optical character recognition (OCR) tools. Issues include noisy backgrounds and faded text due to aging, border/marginal noise, bleed-through, skewing, warping, as well as irregular fonts and page layouts. As a result, OCR tools often produce a large number of spurious bounding boxes (BBs) in addition to those that correspond to words in the document. This paper presents an iterative classification algorithm to automatically label BBs (i.e., as text or noise) based on their spatial distribution and geometry. The approach uses a rule-based classifier to generate initial text/noise labels for each BB, followed by an iterative classifier that refines the initial labels by incorporating local information about each BB: its spatial location, shape, and size. When evaluated on a dataset containing over 72,000 manually labeled BBs from 159 historical documents, the algorithm can classify BBs with 0.95 precision and 0.96 recall. Further evaluation on a collection of 6,775 documents with ground-truth transcriptions shows that the algorithm can also be used to predict document quality (0.7 correlation) and improve OCR transcriptions in 85% of the cases.
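The two-stage scheme described in the abstract can be sketched in miniature. The rules and thresholds below are hypothetical stand-ins, not the paper's actual rule base: a geometry-based seed label per bounding box, then iterative relabeling by majority vote among nearby boxes, so isolated spurious boxes keep their noise label while boxes embedded in a text line converge to text.

```python
def initial_label(bb: dict) -> str:
    # Rule-based seed (hypothetical thresholds): tiny areas or extreme
    # aspect ratios are unlikely to be words.
    w, h = bb["w"], bb["h"]
    if w * h < 20 or w / h > 20 or h / w > 20:
        return "noise"
    return "text"

def refine(bbs: list, labels: list, radius: int = 50, rounds: int = 3) -> list:
    # Iteratively relabel each BB by majority vote of labels within
    # `radius` (Manhattan distance); isolated boxes keep their seed label.
    for _ in range(rounds):
        new = []
        for i, bb in enumerate(bbs):
            neigh = [labels[j] for j, other in enumerate(bbs)
                     if j != i
                     and abs(other["x"] - bb["x"]) + abs(other["y"] - bb["y"]) <= radius]
            if not neigh:
                new.append(labels[i])
            else:
                text_votes = sum(1 for lab in neigh if lab == "text")
                new.append("text" if text_votes * 2 >= len(neigh) else "noise")
        labels = new
    return labels

# A row of word-sized boxes plus one far-away speck.
bbs = [{"x": x, "y": 0, "w": 30, "h": 10} for x in (0, 30, 60, 90)]
bbs.append({"x": 500, "y": 0, "w": 3, "h": 3})
labels = [initial_label(b) for b in bbs]
print(refine(bbs, labels))  # ['text', 'text', 'text', 'text', 'noise']
```

The published algorithm uses a richer feature set and a learned iterative classifier; this sketch only mirrors its seed-then-refine structure.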
An AI Framework for the Automatic Assessment of e-Government Forms
Chun, Andy Hon Wai (City University of Hong Kong)
This article describes the architecture and AI technology behind an XML-based AI framework designed to streamline e-government form processing. The framework performs several crucial assessment and decision support functions, including workflow case assignment, automatic assessment, follow-up action generation, precedent case retrieval, and learning of current practices. To implement these services, several AI techniques were used, including rule-based processing, schema-based reasoning, AI clustering, case-based reasoning, data mining, and machine learning. The primary objective of using AI for e-government form processing is to provide faster, higher-quality service while ensuring that all forms are processed fairly and accurately.
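The rule-based workflow case assignment described above can be illustrated with a minimal sketch. The form fields, thresholds, and queue names here are entirely hypothetical, invented for illustration; the article's actual rule base operates on XML form schemas.

```python
from dataclasses import dataclass

@dataclass
class Form:
    form_type: str   # hypothetical field, e.g. "license" or "permit"
    amount: float    # hypothetical monetary value on the form
    complete: bool   # whether all required fields were filled in

def assign_workflow(form: Form) -> str:
    """Route a submitted form to a processing queue via simple rules."""
    if not form.complete:
        # Follow-up action generation: incomplete forms trigger a request.
        return "follow-up: request missing fields"
    if form.form_type == "license" and form.amount <= 1000:
        # Low-value license applications can be assessed automatically.
        return "auto-approve queue"
    if form.form_type == "license":
        return "senior-officer review"
    return "general assessment queue"

print(assign_workflow(Form("license", 500.0, True)))   # auto-approve queue
print(assign_workflow(Form("license", 5000.0, True)))  # senior-officer review
```

In the framework described in the article, such rules would be complemented by case-based reasoning over precedent cases rather than standing alone.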